
Tomorrow’s managers could be digital – and we need to make sure they’re accountable


Artificial intelligence (AI) has played a significant role in shaping how we work in recent years. Particularly noteworthy is the rise of algorithmic management, which leverages machine learning and automation to optimise efficiency, manage tasks and reshape traditional roles through data-driven decision-making processes.

 

Instead of human supervisors, these algorithms may organise work schedules, assign tasks and evaluate employee performance against predefined rules. In many ways, it’s like having a digital manager overseeing operations and making decisions on behalf of employees.
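
As a rough illustration of how such rules can work, here is a minimal sketch in Python. The workers, thresholds and field names are invented for this example and do not describe any particular vendor’s system.

    # Minimal sketch of rule-based algorithmic management (illustrative only:
    # the workers, thresholds and scoring rules are made up for this example).
    from dataclasses import dataclass

    @dataclass
    class Worker:
        name: str
        open_tasks: int
        completed_today: int

    def assign_task(workers: list[Worker]) -> Worker:
        # Predefined rule: route the next task to whoever has the fewest open tasks.
        chosen = min(workers, key=lambda w: w.open_tasks)
        chosen.open_tasks += 1
        return chosen

    def evaluate(worker: Worker, daily_target: int = 20) -> str:
        # Predefined rule: flag anyone below a fixed daily completion target.
        return "on track" if worker.completed_today >= daily_target else "flagged for review"

    team = [Worker("A", 3, 22), Worker("B", 1, 14)]
    print(assign_task(team).name)               # "B" receives the next task
    print({w.name: evaluate(w) for w in team})  # A is "on track", B is "flagged for review"

The point of the sketch is that every decision follows a fixed rule, with no room for the context a human supervisor would normally weigh.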

 

Although algorithmic management offers many benefits, it also introduces challenges. For example, while these systems can improve productivity, they risk reducing workers to mere task executors, with little room for autonomy or creative input.

 

As such, the integration of algorithmic management should be approached carefully, in a way that respects human diversity and avoids harming employees. To do that, organisations must first understand both the advantages and the pitfalls.

 

The AI paradox

 

While algorithmic management systems may boost productivity, the level of oversight they require – extending from monitoring lunch breaks to informal interactions – carries significant implications for employee wellbeing. Continuous monitoring of employees can heighten stress levels and diminish job satisfaction, potentially fostering feelings of distrust and even leading to burnout.

 

Moreover, the use of AI-driven keyword filtering in job application processes can inadvertently disqualify candidates who are otherwise well-suited for a role. Balancing automation with human judgment is critical to ensure fair and effective recruitment practices.
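
To see how this can happen, consider a deliberately naive keyword filter. This is a minimal sketch: the required keywords and the two CV excerpts are invented, and real applicant-tracking systems are more sophisticated, but the failure mode is the same in spirit.

    # Naive keyword screening (illustrative only; keywords and CVs are hypothetical).
    REQUIRED_KEYWORDS = {"machine learning", "python"}

    def passes_filter(cv_text: str) -> bool:
        text = cv_text.lower()
        return all(keyword in text for keyword in REQUIRED_KEYWORDS)

    cv_a = "Built Python machine learning pipelines for fraud detection."
    cv_b = "Developed predictive models in scikit-learn and deployed them to production."

    print(passes_filter(cv_a))  # True
    print(passes_filter(cv_b))  # False: equally relevant experience, different wording

The second candidate is arguably just as well-suited but never reaches a human reviewer – exactly the kind of outcome that human judgment in the loop is meant to catch.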

 

The transition to remote work further complicates matters, as algorithms may not adequately account for the unique challenges of working from home, such as family interruptions. This can result in misunderstandings regarding an employee’s work ethic.

 

Finally, algorithmic management has the potential to reduce traditional roles to their simplest functions, where employees essentially act as overseers of automated systems. In this scenario, job responsibilities may take on a robotic quality, with every action subject to data-driven scrutiny. This can lead to an environment where employees feel pressured to maintain constant availability and focus, adopting behaviour akin to that of machines.

 

AI’s impact on employment is expanding beyond replacing manual labour, to include intellectual tasks such as customer service and data analysis. Unlike previous technological advancements, AI has the capacity to manage intellectual work, potentially leaving humans with more straightforward manual or emotional tasks. This seems paradoxical, with machines emulating human cognitive functions while humans adapt to the rigid standards set by these machines.

 

Despite the challenges, the good news is that organisations can successfully integrate AI into the workforce and enjoy the benefits of doing so, by prioritising ethics and ensuring that humans retain ultimate oversight.

 

Strategies for workforce harmony

 

To effectively integrate algorithmic management, business leaders should consider strategies such as the Centaur model and worker-led co-design, which can prevent negative implications on the workforce.

 

The Centaur model advocates for a collaborative partnership between AI and humans, enabling both entities to contribute their unique strengths to achieve mutual objectives while maintaining clear role distinctions. In this model, humans remain involved, making strategic decisions and providing creative input, leaving data analysis, computational and routine tasks to AI.

 

In customer service, for instance, chatbots could manage routine queries, allowing human agents to focus on more complex issues. This optimises efficiency while enhancing human capabilities and preserving a clear distinction between human and AI roles.
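
A minimal sketch of that division of labour might look like the routing rule below. The intent labels and the confidence threshold are invented for illustration; the point is simply that anything complex or uncertain is escalated to a person.

    # Illustrative triage between a chatbot and human agents (intents are made up).
    ROUTINE_INTENTS = {"opening_hours", "order_status", "password_reset"}

    def route_query(intent: str, confidence: float) -> str:
        # Keep humans in the loop for anything complex or uncertain.
        if intent in ROUTINE_INTENTS and confidence >= 0.8:
            return "chatbot"
        return "human_agent"

    print(route_query("order_status", 0.95))     # chatbot handles the routine case
    print(route_query("billing_dispute", 0.95))  # escalated to a human agent
    print(route_query("order_status", 0.55))     # low confidence also escalates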

 

Similarly, worker-led co-design involves employees in the initial development phases of algorithmic systems, ensuring alignment with real-world demands and concerns. This not only enhances fairness and transparency, but also aligns with the needs of the workforce. Co-design workshops can be arranged to gather employee insights into the nuances of a job and ethical or practical considerations. By taking these steps, organisations can reduce the risk of employee dissatisfaction stemming from excessive surveillance.

 

Transparency, governance and accountability

 

By prioritising human wellbeing and addressing issues such as bias, fairness and surveillance in the workplace, organisations can establish clear guidelines to ensure that AI-powered processes augment rather than replace human capabilities.

 

Business leaders should communicate clearly so that employees understand exactly how algorithms will affect their work, and should make it clear that algorithmic decisions can be overridden when needed. In the same vein, training will be crucial to equip employees with the knowledge to handle data and algorithms responsibly. This includes robust training on the ethical implications of algorithmic management.
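
In practice, that override capability can be as simple as recording the algorithm’s recommendation separately from the final, human-approved decision. The sketch below is illustrative only; the field names and the example request are invented.

    # Illustrative human-in-the-loop override: the algorithm recommends,
    # a named person decides, and both are kept for the audit trail.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Decision:
        subject: str
        algorithm_recommendation: str
        final_decision: Optional[str] = None
        overridden_by: Optional[str] = None
        reason: Optional[str] = None

        def override(self, manager: str, decision: str, reason: str) -> None:
            self.final_decision = decision
            self.overridden_by = manager
            self.reason = reason

    d = Decision(subject="shift_change_request_042", algorithm_recommendation="reject")
    d.override(manager="j.smith", decision="approve",
               reason="Carer responsibilities not visible to the system")
    print(d)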

 

Regular audits of AI systems are also necessary to assess their impact on workers and the organisation. These audits provide insights for continuous improvement and ensure alignment with the needs of employees.
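
An audit need not be elaborate to be useful. The minimal sketch below – with invented groups, decisions and a hypothetical threshold – simply compares outcome rates across employee groups and flags large gaps for human review.

    # Illustrative fairness check: compare how often an algorithm's decisions
    # favour each group and flag large disparities (the data is made up).
    from collections import defaultdict

    decisions = [
        {"group": "full_time", "approved": True},
        {"group": "full_time", "approved": True},
        {"group": "part_time", "approved": False},
        {"group": "part_time", "approved": True},
        {"group": "part_time", "approved": False},
    ]

    totals, approvals = defaultdict(int), defaultdict(int)
    for d in decisions:
        totals[d["group"]] += 1
        approvals[d["group"]] += d["approved"]

    rates = {g: approvals[g] / totals[g] for g in totals}
    print(rates)
    if max(rates.values()) - min(rates.values()) > 0.2:  # illustrative threshold
        print("Disparity exceeds threshold: schedule a human review of this system")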

 

Finally, organisations must evaluate the psychological and societal impacts of workplace automation. This includes advocating for mental health support and promoting a healthy work-life balance to ensure that solutions are benefitting everyone.

 

The future of work

 

Today, we are at a crossroads, as AI advancements present both opportunities and challenges. It’s important to strike a balance that maximises innovation while maintaining human dignity – and above all, autonomy.

 

For now, business leaders should focus on strategies that enable AI tools to serve the needs of employees first, helping to secure a workplace of the future that enriches and values human experience.


By Eleanor Watson, IEEE Senior Member, AI ethics engineer and AI Faculty at Singularity University
